language gan
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.78)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.78)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.69)
To Beam Or Not To Beam: That is a Question of Cooperation for Language GANs
Due to the discrete nature of words, language GANs must be optimized from rewards provided by discriminator networks via reinforcement learning methods. This is a much harder setting than for continuous tasks, where gradients flow from discriminator to generator, and it usually leads to dramatic learning instabilities. However, we claim that this can be solved by making the discriminator and generator networks cooperate to produce output sequences during training. These cooperative outputs, inherently built to obtain higher discrimination scores, not only provide denser rewards for training but also form a more compact artificial set for discriminator training, hence improving its accuracy and stability. In this paper, we show that our SelfGAN framework, built on this cooperative principle, outperforms Teacher Forcing and obtains state-of-the-art results on two challenging tasks, Summarization and Question Generation.
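The cooperative decoding idea can be illustrated with a toy beam search whose scoring mixes the generator's log-probability with the discriminator's realism score. This is only a minimal sketch of the general principle, not the SelfGAN implementation: `gen_logprob`, `disc_score`, the toy vocabulary, and the `mix` weight are all hypothetical stand-ins.

```python
# Toy stand-ins: a real system would use neural generator/discriminator
# networks; these hypothetical functions only illustrate the interface.
VOCAB = ["the", "cat", "sat", "<eos>"]

def gen_logprob(prefix, word):
    # Hypothetical generator: mildly prefers words matching the position.
    return -1.0 - 0.1 * abs(len(prefix) - VOCAB.index(word))

def disc_score(prefix):
    # Hypothetical discriminator: rewards non-repetitive prefixes.
    return len(set(prefix)) / max(len(prefix), 1)

def cooperative_beam(beam_size=2, max_len=3, mix=0.5):
    """Beam search whose score mixes generator log-probability with the
    discriminator's score, so the kept hypotheses are those the
    discriminator already rates as realistic (the 'cooperative output')."""
    beams = [([], 0.0)]
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            for w in VOCAB:
                new_prefix = prefix + [w]
                s = score + gen_logprob(prefix, w) + mix * disc_score(new_prefix)
                candidates.append((new_prefix, s))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams[0][0]
```

In the paper's framing, sequences decoded this way both earn denser rewards for the generator and serve as harder negatives for the discriminator.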
ColdGANs: Taming Language GANs with Cautious Sampling Strategies
Training regimes based on Maximum Likelihood Estimation (MLE) suffer from known limitations, often producing text sequences that lack coherence and factuality and are prone to repetition. At the root of these limitations is the mismatch between training and inference, the so-called exposure bias. Another problem lies in treating only the reference text as correct, while in practice several alternative formulations could be equally good. Generative Adversarial Networks (GANs) could mitigate these limitations. Nonetheless, the discrete nature of text has hindered their application to language generation: the approaches proposed so far, based on Reinforcement Learning, have been shown to under-perform MLE.
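The "cautious sampling" in the title refers to sampling from a cooled (temperature < 1) version of the generator's distribution, with an importance weight correcting the policy-gradient estimate back toward the original distribution. The sketch below, with assumed function names, shows that mechanism in miniature; it is not the ColdGANs code.

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Temperature < 1 sharpens the distribution ("cold" sampling).
    exps = [math.exp(l / temperature) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def cold_sample(logits, temperature=0.3, rng=random):
    """Sample a token index from a cooled proposal distribution q, and
    return the importance weight p(x)/q(x) that keeps a policy-gradient
    estimate unbiased with respect to the original distribution p."""
    p = softmax(logits)               # original generator distribution
    q = softmax(logits, temperature)  # cautious (cold) proposal
    idx = rng.choices(range(len(logits)), weights=q, k=1)[0]
    return idx, p[idx] / q[idx]
```

Sampling from `q` avoids the low-probability, degenerate sequences that destabilize RL training, while the returned weight can rescale the reward so the gradient still targets `p`.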
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.59)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.59)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.59)
Review for NeurIPS paper: ColdGANs: Taming Language GANs with Cautious Sampling Strategies
While it is impressive that this work gets slightly better results than MLE, there are many hyper-parameters to tune, including the mixture weight, proposal temperature, nucleus cutoff, importance-weight clipping, and MLE pretraining (according to the appendix). I find it disappointing that so many tricks are needed.
1. If you get rid of pretraining/initialization from T5/BART, would this method still work?
2. This work requires MLE pretraining, while the prior work "Training Language GANs from Scratch" does not.
For evaluation, since the claim of this paper is to reduce exposure bias, training a discriminator on generations from the learned model is needed to confirm that this is the case, in a way similar to Figure 1. Note that this differs from Figure 4: during training the discriminator co-adapts with the generator, and it might get stuck at a local optimum.
Review for NeurIPS paper: ColdGANs: Taming Language GANs with Cautious Sampling Strategies
This paper proposes a new method to stabilize GAN training for language generation. After the author rebuttal and reviewer discussion, the scores remained divergent: in the end, the paper received two reject and two accept recommendations. On one hand, the main criticism of the paper is the number of hyper-parameters and tricks that must be tuned. On the other hand, the reviewers appreciated the additional clarifications and experiments in the rebuttal, and think the paper provides a careful and insightful analysis of text GANs.
InitialGAN: A Language GAN with Completely Random Initialization
Text generative models trained via Maximum Likelihood Estimation (MLE) suffer from the notorious exposure bias problem, and Generative Adversarial Networks (GANs) have shown potential to tackle it. Existing language GANs adopt estimators like REINFORCE or continuous relaxations to model word probabilities. The inherent limitations of such estimators lead current models to rely on pre-training techniques (MLE pre-training or pre-trained embeddings). Representation modeling methods, which are free from those limitations, are seldom explored because of their poor performance in previous attempts. Our analyses reveal that invalid sampling methods and unhealthy gradients are the main contributors to this unsatisfactory performance. In this work, we present two techniques to tackle these problems: dropout sampling and a fully normalized LSTM. Based on these two techniques, we propose InitialGAN, whose parameters are fully randomly initialized. In addition, we introduce a new evaluation metric, Least Coverage Rate, to better evaluate the quality of generated samples. The experimental results demonstrate that InitialGAN outperforms both MLE and the other compared models. To the best of our knowledge, this is the first time a language GAN outperforms MLE without using any pre-training techniques.
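The REINFORCE estimator the abstract refers to can be shown in a one-token toy case: the discriminator's score of a sampled token acts as a scalar reward weighting the score-function gradient of the generator's log-likelihood. This is a generic illustration of the estimator, with assumed names, not code from InitialGAN.

```python
import math
import random

def reinforce_gradient(theta, vocab_size, disc_reward, rng=random):
    """One-token toy generator with logits theta. Sample a token from
    softmax(theta), then weight the score-function gradient
    d/dtheta log p(token) by the discriminator's reward (REINFORCE)."""
    z = sum(math.exp(t) for t in theta)
    probs = [math.exp(t) / z for t in theta]
    token = rng.choices(range(vocab_size), weights=probs, k=1)[0]
    reward = disc_reward(token)  # hypothetical discriminator score
    # d log p(token) / d theta_i = 1[i == token] - probs[i]
    return [reward * ((1.0 if i == token else 0.0) - probs[i])
            for i in range(vocab_size)]
```

The high variance of this estimator over long sequences is precisely why, per the abstract, earlier language GANs lean on MLE pre-training or pre-trained embeddings.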